
The AI research combatting the online spread of false information

Craig O'Callaghan

Updated Sep 09, 2024

Sponsored by Loughborough University 

You don’t have to look hard for recent examples of how the online spread of false information can have significant real-world consequences. Whether you’re trying to keep up with the latest political news or just doing your online shopping, you can’t always be sure that the information you’re exposed to is accurate.

Innovative research by Dr Nick Hajli from Loughborough University digs deep into the challenge of preventing false information from spreading online, harnessing artificial intelligence to reliably assess the legitimacy of content and creators. 

We spoke to Dr Hajli – who also leads Loughborough’s International Business, Strategy and Innovation group – to learn more about his pioneering research and how AI could be used not only to detect false information, but also to curtail its spread before it can cause harm.

What are some of the greatest challenges in tackling the spread of false information online? 

Social media platforms have millions of users continuously generating content. The sheer scale of this information flow makes it challenging to monitor and verify content in real time.

Disinformation campaigns use advanced techniques, including AI-generated deepfakes, which are increasingly difficult to detect with traditional methods. Distinguishing between genuine users and sophisticated bots is complex due to the bots' ability to closely mimic human behaviour.  

Implementing strict regulations can conflict with the principles of free speech, making it challenging to create policies that are both effective and fair. 

Disinformation can also originate from various sources, including state actors, ideological groups, and individuals with different motivations, complicating the identification of root causes and appropriate responses. 

How does your research address these challenges? 

My research leverages machine learning and text mining to analyse large datasets of tweets, aiming to identify patterns and detect malicious bots early.  

Actor-Network Theory helps in understanding the interplay between human and non-human actors (bots) in social media networks, providing insights into how disinformation spreads and how it can be countered. 

By creating tools that detect harmful social bots, my research aims to mitigate the spread of false information before it gains traction. Investigating the mechanisms through which social bots influence public opinion helps in designing more targeted interventions. 
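To make the general approach concrete, here is a minimal sketch of a text-based bot classifier in the spirit of what Dr Hajli describes. The toy tweets, feature choices and model are illustrative assumptions for this article, not his actual dataset or implementation:

```python
# Illustrative sketch only: a simple text-based bot classifier.
# Assumes a labelled dataset of tweets; the examples below are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy labelled data: 1 = bot-like, 0 = human-like (illustrative only).
tweets = [
    "BREAKING!!! Click here to see the SHOCKING truth http://spam.example",
    "Follow back for follow back! #follow #follow #follow",
    "Win a FREE iPhone now!!! Limited offer http://spam.example",
    "Amazing deal!!! Buy now http://spam.example #ad #ad #ad",
    "Had a lovely walk along the canal this morning.",
    "Really enjoyed the seminar on media literacy today.",
    "Does anyone have a good recipe for lentil soup?",
    "Looking forward to the weekend, finally some rest.",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Character n-grams are a common, language-robust choice for short texts.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.25, random_state=0, stratify=labels
)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Score a new tweet: estimated probability that it is bot-like.
print(model.predict_proba(["FREE followers!!! Click http://spam.example"])[0][1])
```

A production system would also draw on account-level signals such as posting frequency, account age and follower networks, and would be retrained continuously as bot behaviour evolves.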

What can social media companies do to mitigate the spreading of false information? 

Social media companies are increasingly responsible for implementing content moderation policies to curb the spread of false information. They need to invest in more sophisticated detection technologies and human oversight.  

Companies must be transparent about their content moderation practices and accountable for their decisions. This includes clear communication with users about why certain content is flagged or removed. 

In the future, social media companies are likely to enhance their use of AI for real-time content analysis, collaborate with fact-checking organisations, and implement stricter verification processes for accounts. 

Given the rise of deepfaked audio and video, do you believe people can still correctly identify disinformation? 

As disinformation techniques become more sophisticated, it will be increasingly difficult for individuals to distinguish between genuine and false content, particularly with the rise of deepfakes.  

Enhancing digital literacy is crucial. Educating users on how to critically evaluate information sources and recognise disinformation techniques can empower them to make more informed decisions. 

If you were able to make one lasting change to social media platforms, what would it be and why? 

A lasting change would be to implement comprehensive verification systems for both accounts and content. This could involve: 

  • Verified accounts: Stricter verification processes for user accounts to ensure authenticity. 
  • Content verification: Integrating advanced AI tools with human oversight to verify the accuracy of content before it spreads widely (see the sketch after this list). 
  • Transparency features: Providing users with clear indicators of verified information and sources. 
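As referenced above, here is a minimal sketch of how the human-oversight part of such a content-verification layer might route posts. The scoring function and thresholds are hypothetical placeholders, not any platform's real system:

```python
# Illustrative sketch of human-in-the-loop content verification.
# The scoring model and thresholds are assumptions, not a platform's real API.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "publish", "human_review", or "hold"
    score: float  # estimated probability the content is false

def falsehood_score(text: str) -> float:
    """Placeholder for a trained misinformation classifier.
    Here: a toy heuristic so the sketch runs end to end."""
    suspicious = ("miracle cure", "they don't want you to know", "100% proof")
    return 0.9 if any(s in text.lower() for s in suspicious) else 0.1

def route(text: str, low: float = 0.3, high: float = 0.7) -> Decision:
    """Confidently genuine content publishes immediately; confidently false
    content is held; the uncertain middle band goes to human moderators."""
    score = falsehood_score(text)
    if score < low:
        return Decision("publish", score)
    if score > high:
        return Decision("hold", score)
    return Decision("human_review", score)

print(route("Council confirms new bus timetable from Monday."))
print(route("Miracle cure they don't want you to know about!"))
```

The thresholds encode a design trade-off: widening the middle band sends more content to human moderators, improving accuracy at the cost of speed and moderation workload.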

Verification systems can significantly reduce the spread of false information by ensuring that only credible and authenticated sources gain visibility.  

This approach addresses the root of the problem by preventing disinformation from reaching large audiences, thereby reducing its impact on public discourse.